Deep stochastic optimization in finance
Authors
Abstract
This paper outlines, and through stylized examples evaluates, a novel and highly effective computational technique in quantitative finance. Empirical Risk Minimization (ERM) and neural networks are key to this approach. Powerful open-source optimization libraries allow for efficient implementations of the algorithm, making it viable in high-dimensional structures. The free-boundary problems related to American and Bermudan options showcase both the power and the potential difficulties that specific applications may face. The impact of the size of the training data is studied in a simplified Merton-type problem. The classical option hedging problem exemplifies the need for market generators or a large number of simulations.
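To illustrate the kind of method the abstract refers to, the sketch below applies Empirical Risk Minimization with neural networks to the classical option hedging problem mentioned above. It is a minimal PyTorch illustration, not the authors' implementation: it assumes a Black-Scholes simulator with zero interest rate, a European call payoff, one small network per trading date, and a squared hedging error as the empirical risk; all parameter values and architecture choices are hypothetical.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# --- Illustrative market simulation (Black-Scholes, zero rate; assumed, not from the paper) ---
n_paths, n_steps = 20_000, 30
T, sigma, s0, strike = 30.0 / 365.0, 0.2, 100.0, 100.0
dt = T / n_steps

z = torch.randn(n_paths, n_steps)
log_increments = (-0.5 * sigma**2) * dt + sigma * dt**0.5 * z
paths = s0 * torch.exp(torch.cumsum(log_increments, dim=1))
paths = torch.cat([torch.full((n_paths, 1), s0), paths], dim=1)   # prepend S_0

# --- One small network per trading date maps the current price to a hedge ratio ---
nets = nn.ModuleList(
    [nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1)) for _ in range(n_steps)]
)
optimizer = torch.optim.Adam(nets.parameters(), lr=1e-3)

payoff = torch.clamp(paths[:, -1] - strike, min=0.0)              # European call payoff

# --- Empirical Risk Minimization: average squared hedging error over simulated paths ---
for epoch in range(100):
    pnl = torch.zeros(n_paths)
    for t in range(n_steps):
        s_t = paths[:, t:t + 1]
        delta = nets[t](s_t / s0).squeeze(1)                      # hedge ratio at time t
        pnl = pnl + delta * (paths[:, t + 1] - paths[:, t])       # gains from rebalancing
    loss = ((payoff - pnl) ** 2).mean()                           # empirical risk
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Minimizing the average replication error over simulated paths turns the hedging task into a stochastic optimization problem that standard open-source optimizers (here Adam) can handle, which is the viewpoint the abstract emphasizes; the quality of the result then hinges on the simulated training data, i.e. on market generators or a large number of simulations.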
Similar references
Duality and optimality conditions in stochastic optimization and mathematical finance
This article studies convex duality in stochastic optimization over finite discrete-time. The first part of the paper gives general conditions that yield explicit expressions for the dual objective in many applications in operations research and mathematical finance. The second part derives optimality conditions by combining general saddle-point conditions from convex duality with the dual repr...
Convex Duality in Stochastic Optimization and Mathematical Finance
This paper proposes a general duality framework for the problem of minimizing a convex integral functional over a space of stochastic processes adapted to a given filtration. The framework unifies many well-known duality frameworks from operations research and mathematical finance. The unification allows the extension of some useful techniques from these two fields to a much wider class of prob...
Deep Learning in Finance
We explore the use of deep learning hierarchical models for problems in financial prediction and classification. Financial prediction problems – such as those presented in designing and pricing securities, constructing portfolios, and risk management – often involve large data sets with complex data interactions that currently are difficult or impossible to specify in a full economic model. App...
Distributed stochastic optimization for deep learning
We study the problem of how to distribute the training of large-scale deep learning models in the parallel computing environment. We propose a new distributed stochastic optimization method called Elastic Averaging SGD (EASGD). We analyze the convergence rate of the EASGD method in the synchronous scenario and compare its stability condition with the existing ADMM method in the round-robin sche...
Distributed stochastic optimization for deep learning (thesis)
We study the problem of how to distribute the training of large-scale deep learning models in the parallel computing environment. We propose a new distributed stochastic optimization method called Elastic Averaging SGD (EASGD). We analyze the convergence rate of the EASGD method in the synchronous scenario and compare its stability condition with the existing ADMM method in the round-robin sche...
Journal
Journal title: Digital Finance
Year: 2022
ISSN: 2524-6984, 2524-6186
DOI: https://doi.org/10.1007/s42521-022-00074-6